Appendices to "GNNGuard: Defending Graph Neural Networks against Adversarial Attacks"

Neural Information Processing Systems

Results are shown in Table 6. Table 6: Defense performance (multi-class classification accuracy) against influence-targeted attacks. Results are shown in Table 7. To evaluate how harmful non-targeted attacks can be for GNNs, we first report results without attack and under attack (without defense), i.e., the "No Attack" vs. "Attack" columns. The accuracy of even the strongest GNN is reduced by 18.7% when it is attacked; we also report performance when the defender is applied to clean, non-attacked graphs.


Defending Graph Neural Networks against Adversarial Attacks

Neural Information Processing Systems

However, recent findings indicate that small, unnoticeable perturbations of graph structure can catastrophically reduce performance of even the strongest and most popular Graph Neural Networks (GNNs).


all of the components", has an interesting "idea of stabilizing training", and "achieves state-of-the-art performance."

Neural Information Processing Systems

We thank the reviewers for their time and valuable feedback. Below, we clarify several important points raised by the reviewers. An extra page in the final version will allow us to include the requested details. We believe these clarifications, together with new analyses, resolve all key issues raised. Rep'16] and provides a highly constraining measure of local topology.



GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Zhang, Xiang, Zitnik, Marinka

arXiv.org Machine Learning

Deep learning methods for graphs achieve remarkable performance across a variety of domains. However, recent findings indicate that small, unnoticeable perturbations of graph structure can catastrophically reduce performance of even the strongest and most popular Graph Neural Networks (GNNs). Here, we develop GNNGuard, a general algorithm to defend against a variety of training-time attacks that perturb the discrete graph structure. GNNGuard can be straightforwardly incorporated into any GNN. Its core principle is to detect and quantify the relationship between the graph structure and node features, if one exists, and then exploit that relationship to mitigate negative effects of the attack. GNNGuard learns how best to assign higher weights to edges connecting similar nodes while pruning edges between unrelated nodes. The revised edges allow for robust propagation of neural messages in the underlying GNN. GNNGuard introduces two novel components, the neighbor importance estimation and the layer-wise graph memory, and we show empirically that both components are necessary for a successful defense. Across five GNNs, three defense methods, and five datasets, including a challenging human disease graph, experiments show that GNNGuard outperforms existing defense approaches by 15.3% on average. Remarkably, GNNGuard can effectively restore state-of-the-art performance of GNNs in the face of various adversarial attacks, including targeted and non-targeted attacks, and can defend against attacks on heterophily graphs.
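The core principle described in the abstract — upweighting edges between feature-similar nodes and pruning edges between unrelated ones — can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the use of plain cosine similarity, and the pruning threshold `tau` are all illustrative assumptions (the paper's neighbor importance estimation is learned, not a fixed rule).

```python
import numpy as np

def defensive_edge_weights(X, A, tau=0.1):
    """Reweight a graph's edges by node-feature similarity (illustrative sketch).

    X   : (n, d) node feature matrix
    A   : (n, n) binary adjacency matrix
    tau : hypothetical pruning threshold -- edges between nodes whose
          feature similarity falls below tau are removed
    """
    # Cosine similarity between every pair of node feature vectors.
    norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    Xn = X / norms
    S = Xn @ Xn.T

    # Keep similarity scores only on existing edges; prune dissimilar edges.
    W = S * A
    W[W < tau] = 0.0

    # Row-normalize surviving weights so each node's neighbor weights
    # sum to 1 (a common choice for message passing).
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # avoid division by zero for isolated nodes
    return W / row_sums

# Toy example: nodes 0 and 1 have similar features; node 2 is an outlier
# (e.g., an adversarially inserted neighbor) connected to both.
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
W = defensive_edge_weights(X, A, tau=0.5)
```

On this toy graph, the edges touching the dissimilar node 2 fall below the threshold and are pruned, while the edge between the similar nodes 0 and 1 survives and carries all of the message-passing weight — mirroring, in a simplified way, how a similarity-based defense blunts an attack that wires unrelated nodes together.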